Accelerating Volume Reconstruction with 3D Texture Hardware

Authors

  • Timothy J. Cullip
  • Ulrich Neumann
Abstract

This paper describes a new method of rendering volumes that leverages the 3D texturing hardware in Silicon Graphics RealityEngine workstations. The method defines the volume data as a 3D texture and utilizes the parallel texturing hardware to perform reconstruction and resampling on polygons embedded in the texture. The resampled data on each polygon is transformed into color and opacity values and composited into the frame buffer. A 128×128×64 volume is rendered into a 512×512 window at over 10 frames per second. Two alternative strategies for embedding the resampling polygons are described and their trade-offs are discussed. The method is easy to implement, and we apply it to the production of digitally reconstructed radiographs as well as opacity-based volume rendered images. The generality of the approach is demonstrated by describing its application to the proposed PixelFlow graphics system. PixelFlow overcomes the lighting and volume-size limitations imposed by the RealityEngine; it is expected to render 256³ data sets on a 640×512 screen at over 10 frames per second.

(RealityEngine is a trademark of Silicon Graphics Inc.)

Fig. 1. Resampling polygon orientations: (a) object-space sample planes, (b) image-space sample planes.

Figure 1a shows resampling on polygons aligned with the object-space axes. Figure 1b shows resampling on polygons aligned with the image-space axes. In either case, the resampled values behind each pixel are combined to produce a color for that pixel. The combining method is often a compositing operation, but it may be any other operation required by the visualization application. Polygons aligned in object-space are defined to lie within the volume and are rendered with GL library calls. This method is complicated slightly by the need to reorient the sampling polygons into the plane most parallel to the view-plane as the view-point changes; this is accomplished by examining the view matrix and explicitly creating polygons for the six cases that arise [Westover91]. Polygons aligned in image-space must be clipped to the boundaries of the volume to ensure valid texture coordinates: the polygons are defined in image-space and transformed by the inverse viewing matrix into object-space, where the clipping occurs. Clipped polygons are then rendered with the usual GL library calls.

In addition to using unclipped polygons, the object-space method has other advantages. The texturing time for any polygon is proportional to the number of pixels it covers, so a priori information about the extent of interesting features in each slice of the volume may be used to minimize the polygon size, and thus its texturing time, as a function of its location. The texture memory of the RealityEngine is limited to 1M 12-bit data points; to render larger volumes, slab subsets are loaded and rendered in succession. Texture memory may be reloaded in about 0.1 seconds. With the object-space method, rendering each slab is simple, whereas the image-space method must render polygons multiple times, clipping them to the currently loaded volume slab.
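
The object-space scheme amounts to trilinear reconstruction of the 3D texture on each sampling plane followed by a per-pixel combination step. The following is a minimal CPU sketch of that pipeline in Python/NumPy, using SciPy's map_coordinates for the trilinear lookup; it illustrates the resampling-and-compositing idea only and is not the paper's GL implementation. The function name, the orthographic toy view, and the simple opacity transfer are assumptions introduced for this example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def render_object_space(volume, n_planes=128, opacity_scale=0.05):
    """volume: (Z, Y, X) float array of data values in [0, 1].
    Resamples the volume on n_planes object-space-aligned planes
    (trilinear reconstruction) and composites them back to front."""
    Z, Y, X = volume.shape
    # Toy orthographic view straight down the z axis: one image pixel per (y, x).
    yy, xx = np.meshgrid(np.arange(Y), np.arange(X), indexing="ij")
    image = np.zeros((Y, X))
    for z in np.linspace(Z - 1, 0, n_planes):             # far-to-near traversal
        coords = np.stack([np.full(yy.size, z), yy.ravel(), xx.ravel()])
        samples = map_coordinates(volume, coords, order=1).reshape(Y, X)
        alpha = np.clip(samples * opacity_scale, 0.0, 1.0)  # toy opacity transfer
        image = samples * alpha + image * (1.0 - alpha)     # 'over' composite
    return image
```

In the hardware method, the per-plane resampling above is what the texturing unit performs in parallel for every pixel a polygon covers, and the composite is handled by the frame-buffer blending; the image-space variant differs only in how the sampling planes are placed and clipped.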
3. Radiographs

A digitally reconstructed radiograph of medical volume data is produced by combining the resampled values behind each pixel to approximate the attenuation integral:

    pixel intensity = 1.0 − exp(−Σᵢ uᵢ d)        (1)

where uᵢ are the resample values behind a pixel and d is the spacing between sample values. Note that d is constant for all samples behind a pixel but, due to perspective, varies from pixel to pixel. The resampled uᵢ terms are summed at each pixel, and the d factors are applied by using an additional full-screen polygon with a 2D texture corresponding to the d value required at each pixel. The summation results may be viewed directly, or the exponential required to mimic a radiograph may be computed at each pixel by using a lookup table. The RealityEngine has a maximum precision of 12 bits per frame-buffer and texture component, and the summation could easily overflow that range unless the sample values are properly scaled. Our implementation maintains 12-bit volume data values in texture memory and scales each resampled value by a user-controlled "exposure" value ranging from zero to one. The scaled samples are then summed and clamped if they exceed the 12-bit range. In practice, it has been easy to find suitable exposure settings for the data sets tested. Figure 2 shows radiograph images of 128×128×64 CT data of a human pelvis made with polygons aligned in object-space.

Fig. 2. Digitally reconstructed radiographs.
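
As a concrete reading of Eq. (1) and the exposure scheme above, the sketch below scales each resampled value by the exposure, sums and clamps the result to the 12-bit range, applies the per-pixel spacing d, and then evaluates the exponential that the hardware would take from a lookup table. The function name and the lut_scale normalization constant are assumptions for illustration, not values from the paper.

```python
import numpy as np

FB_MAX = 4095   # 12-bit frame-buffer / texture-component precision

def drr_image(resampled_planes, d_map, exposure, lut_scale=1e-3):
    """resampled_planes: (N, H, W) array of u_i values (0..4095) reconstructed
       on the N sampling polygons behind each pixel.
       d_map: (H, W) per-pixel sample spacing d (constant along a ray, varying
       across the image under perspective).
       exposure: user-controlled scale in (0, 1]."""
    # Scale each resampled value, sum behind every pixel, clamp to 12 bits.
    acc = np.clip((resampled_planes * exposure).sum(axis=0), 0, FB_MAX)
    # Apply the per-pixel spacing d (done in hardware with a full-screen
    # polygon carrying a 2D texture of d values).
    attenuation = acc * d_map * lut_scale
    # Exponential of Eq. (1); a per-pixel lookup table in the real renderer.
    return 1.0 - np.exp(-attenuation)
```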
4. Opacity-Based Rendering

The summation of samples produces radiograph images; compositing samples produces images with occlusion. Only one texture component is required for the linear attenuation coefficient used to produce radiographs. Two 8-bit texture components can represent the raw data and a precomputed shading coefficient. The resampled data component values are used as indices into an opacity lookup table; this lookup uses the texture hardware for speed. The shading coefficient is a function of the original data gradient and multiplies the sample opacity to produce images of shaded features, as shown in Figure 3. This figure shows the human pelvis data set above an image of a 4 mm volume of a chicken embryo acquired by a microscopic MRI scanner. The precomputed shading fixes the light position(s) relative to the volume. For more general lighting by sources fixed in image-space, the shade texture component must be replaced by three components containing the normalized data gradient. Unfortunately, the resampled gradient on the polygons is not normalized, and normalization is an expensive process requiring a square root. Lighting without normalization is possible, but this has not yet been tried to see how serious the artifacts are.

Fig. 3. Shaded volume rendering.

5. Performance

We consider two data sizes rendered into a 512×512 window. The smaller data size of 128×128×64 may be rendered at ten frames per second using 128 polygons aligned in object-space, which equates to a processing rate of about 10 million voxels per second. In our test images we measured about 160 million pixel operations per second, where each pixel operation is a trilinear interpolation of the 3D texture components, a multiplication by a scaling or opacity factor, and a summation or composite into the frame buffer. The larger data size of 256×256×64 requires four 256×256×16 texture slabs and is rendered at 2.5 frames per second with 256 resampling polygons; loading texture slabs consumes less than 0.1 seconds per slab. A progressive-refinement approach would allow a user to manipulate the low-resolution data at a high frame rate and render the high-resolution data as soon as the user stops the motion. Performance is very nearly linear in the number of pixels processed: as the number of screen pixels or resampling polygons is doubled, the frame rate is halved. If more resampling polygons are used, higher-quality images are obtained at the expense of lower rendering speed.

6. PixelFlow

Texturing hardware is likely to be a common feature of graphics systems in the future. The PixelFlow graphics system under development at the University of North Carolina at Chapel Hill will have texturing hardware [Molnar92] that is suitable for a variant of the polygon resampling approach described above for the RealityEngine. We propose a polygon texturing approach for the PixelFlow system that will overcome the limitations on realistic lighting and data size imposed by the RealityEngine. The texturing hardware in PixelFlow will allow 128 pixel processors to access eight arbitrarily-addressed 32-bit values in texture memory in under 500 μs. PixelFlow texturing hardware does not perform any operations on these texture values; rather, they are simply loaded into the pixel processors, where a user's program manipulates them as ordinary data. If the 32-bit values are treated as four 8-bit texture components, then three may be used to hold the data gradient needed for general lighting, leaving one for the data value itself.
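
The per-sample work described in Sections 4 and 6 can be summarized in a short sketch: two 8-bit texture components (raw data value and precomputed shading coefficient) are unpacked from a resampled texel, the data value indexes an opacity lookup table, the shade coefficient multiplies the resulting opacity, and the sample is composited back to front. The 16-bit packing layout, the grayscale composite, and all names are assumptions for illustration; neither the RealityEngine nor the PixelFlow code path is reproduced here.

```python
import numpy as np

def composite_shaded(packed_planes, opacity_lut):
    """packed_planes: (N, H, W) uint16 array of resampled texels, back to front,
       with the data value in the low byte and the shade coefficient in the
       high byte.
       opacity_lut: length-256 array mapping a data value to opacity in [0, 1]."""
    intensity = np.zeros(packed_planes.shape[1:])
    for texels in packed_planes:                        # back-to-front traversal
        data = (texels & 0x00FF).astype(np.intp)        # raw data component
        shade = (texels >> 8) / 255.0                   # precomputed shade component
        alpha = opacity_lut[data] * shade               # shade modulates sample opacity
        intensity = alpha + intensity * (1.0 - alpha)   # 'over' composite
    return intensity
```

On PixelFlow, a loop of this kind would run as the user's per-pixel program on the values delivered by the texture lookups, which is what removes the RealityEngine's restriction to precomputed shading.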


Similar resources

Applications of Texture Mapping to Volume and Flow Visualization

We describe six visualization methods which take advantage of hardware polygon scan conversion, texture mapping, and compositing, to give interactive viewing of 3D scalar fields, and motion for 3D flows. For volume rendering, these are splatting of an optimized 3D reconstruction filter, and tetrahedral cell projection using a texture map to provide the exponential per pixel necessary for accura...


Rapid 3D Cone-Beam Reconstruction with the Algebraic Reconstruction Technique (ART) by Utilizing Texture Mapping Graphics Hardware

The Algebraic Reconstruction Technique (ART) reconstructs a 2D or 3D object from its projections. It has, in certain scenarios, many advantages over the more popular Filtered Backprojection approaches and has also recently been shown to perform well for 3D cone-beam reconstruction. However, so far ART’s slow speed has prohibited its routine use in clinical applications. In this paper, we devise...


On the Use of Graphics Hardware to Accelerate Algebraic Reconstruction Methods

The Algebraic Reconstruction Technique (ART) reconstructs a 2D or 3D object from its projections. It has, in certain scenarios, many advantages over the more popular Filtered Backprojection approaches and has also recently been shown to perform well for 3D cone-beam reconstruction. However, so far, ART’s slow speed has prohibited its routine use in clinical applications. Currently, a software i...


Efficient ray tracing on 3D regular grids for fast generation of digitally reconstructed radiographs in iterative tomographic reconstruction techniques

Cone beam projection is an essential and particularly time consuming part of any iterative tomographic reconstruction algorithm. On current graphics hardware especially the amount and pattern of memory accesses is a limiting factor when read-only textures cannot be used. With the final objective of accelerating iterative reconstruction techniques, a non-oversampling Joseph-like raytracing proje...


Accurate Dense Stereo Reconstruction using Graphics Hardware

Vertex programs and pixel shaders found in modern graphics hardware are commonly used to enhance the realism of rendered scenes. Recently these hardware facilities were exploited to obtain interactive non-photorealistic effects and to perform low-level image processing tasks like texture filtering for volume visualization. We exploit modern graphics hardware to accomplish the higher level visio...


Filtered Blending: A new, minimal Reconstruction Filter for Ghosting-Free Projective Texturing with Multiple Images

Whenever approximate 3D geometry is projectively texture-mapped from different directions simultaneously, annoyingly visible aliasing artifacts are the result. To prevent such ghosting in projective texturing and image-based rendering, we propose a new GPU-based rendering strategy and a new, viewdependent definition of ghosting. The algorithm is applicable to any kind of image-based rendering m...



Publication year: 1993